Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high-fidelity
results typically require controlled settings, expensive devices, or
significant manual effort. On the other hand, methods that are automatic and
work on 'in the wild' Internet images often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environment), and different materials under the same illumination, provides
valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. Similar constraints also arise when observing
multiple materials in a single environment, or a single material across
multiple environments. The core of this approach is an optimization procedure
that uses two neural networks that are trained on synthetic images to predict
good gradients in parametric space given observations of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
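The key observation above — that one material seen under several illuminations, and several materials seen under one illumination, jointly constrain both factors — can be illustrated with a deliberately tiny sketch. Here "rendering" is just a scalar product of a material value and a light value, and plain gradient descent stands in for the paper's learned gradient predictors; all names and the rendering model are illustrative, not the authors' method.

```python
import numpy as np

def render(m, L):
    # Toy forward model: observed intensity is material value times light value.
    return m * L

def joint_estimate(obs, n_mats, n_envs, lr=0.02, steps=5000):
    """Jointly recover material and illumination values from shared observations.

    obs: dict {(i, j): intensity} — material i photographed under environment j.
    Overlapping (i, j) pairs are what make the joint problem well-constrained
    (up to a global scale ambiguity between materials and lights).
    """
    rng = np.random.default_rng(0)
    m = rng.uniform(0.5, 1.5, n_mats)   # material estimates
    L = rng.uniform(0.5, 1.5, n_envs)   # illumination estimates
    for _ in range(steps):
        gm = np.zeros(n_mats)
        gL = np.zeros(n_envs)
        for (i, j), y in obs.items():
            r = render(m[i], L[j]) - y  # residual for this photo
            gm[i] += r * L[j]           # d/dm of 0.5 * r**2
            gL[j] += r * m[i]           # d/dL of 0.5 * r**2
        m -= lr * gm
        L -= lr * gL
    return m, L
```

Note the inherent scale ambiguity (scaling all materials by c and all lights by 1/c leaves every observation unchanged), so only the products m[i] * L[j] are identifiable — the same kind of ambiguity a real material/illumination factorization must resolve with priors.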
Deep Detail Enhancement for Any Garment
Creating fine garment details requires significant effort and substantial
computational resources. In contrast, a coarse shape may be easy to acquire in
many scenarios (e.g., via low-resolution physically-based simulation, linear
blend skinning driven by skeletal motion, portable scanners). In this paper, we
show how to enhance, in a data-driven manner, rich yet plausible details
starting from a coarse garment geometry. Once the parameterization of the
garment is given, we formulate the task as a style transfer problem over the
space of associated normal maps. In order to facilitate generalization across
garment types and character motions, we introduce a patch-based formulation
that hallucinates high-resolution geometric details (i.e., wrinkle density and
shape) by matching a Gram-matrix-based style loss. We
extensively evaluate our method on a variety of production scenarios and show
that our method is simple, light-weight, efficient, and generalizes across
underlying garment types, sewing patterns, and body motion.
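The Gram-matrix style loss mentioned above is the standard style-transfer construction: second-order statistics of feature maps, compared between two images. A minimal NumPy sketch (the feature extractor, channel counts, and patch shapes are illustrative — the paper matches these statistics over learned features of normal-map patches):

```python
import numpy as np

def gram_matrix(feats):
    """feats: (C, H, W) feature maps -> (C, C) Gram matrix of channel correlations."""
    C, H, W = feats.shape
    F = feats.reshape(C, H * W)
    return F @ F.T / (C * H * W)  # normalized so scale is comparable across sizes

def style_loss(coarse_feats, detail_feats):
    """Squared Frobenius distance between the two Gram matrices."""
    G1 = gram_matrix(coarse_feats)
    G2 = gram_matrix(detail_feats)
    return float(np.sum((G1 - G2) ** 2))
```

Because the Gram matrix discards spatial layout and keeps only channel co-activation statistics, matching it encourages the right *distribution* of wrinkle patterns rather than copying exact wrinkle positions — which is what makes the patch-based formulation generalize across garments.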
Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis
Synthesizing dynamic appearances of humans in motion plays a central role in applications such as AR/VR and video editing. While many recent methods have been proposed to tackle this problem, handling loose garments with complex textures and highly dynamic motion still remains challenging. In this paper, we propose a video-based appearance synthesis method that tackles such challenges and demonstrates high-quality results for in-the-wild videos that have not been shown before. Specifically, we adapt a StyleGAN-based architecture to the task of person-specific, video-based motion retargeting. We introduce a novel motion signature that is used to modulate the generator weights to capture dynamic appearance changes, as well as to regularize the single-frame pose estimates to improve temporal coherency. We evaluate our method on a set of challenging videos and show that our approach achieves state-of-the-art performance both qualitatively and quantitatively.
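Modulating generator weights by a conditioning vector is the StyleGAN2 mechanism the abstract alludes to, here with a motion signature in place of the usual style code. A minimal sketch of weight modulation with demodulation (shapes, names, and the plain NumPy form are illustrative; this is not the authors' implementation):

```python
import numpy as np

def modulate_weights(w, sig, eps=1e-8):
    """w: (out_c, in_c, k, k) conv weights; sig: (in_c,) motion-signature scales.

    Scales each input channel of the convolution by the signature, then
    demodulates so every output filter has (approximately) unit L2 norm,
    as in StyleGAN2 -- keeping activation magnitudes stable while letting
    the conditioning vector steer which input features each filter uses.
    """
    w_mod = w * sig[None, :, None, None]                       # per-input-channel scaling
    demod = 1.0 / np.sqrt(np.sum(w_mod ** 2, axis=(1, 2, 3)) + eps)
    return w_mod * demod[:, None, None, None]                  # per-output-filter renorm
```

Feeding a motion-dependent signature through this path means the *same* generator produces different wrinkle and garment-flow appearance as the motion changes, without retraining per frame.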
Dynamic Neural Garments
A vital task of the wider digital human effort is the creation of realistic
garments on digital avatars, both in the form of characteristic fold patterns
and wrinkles in static frames as well as richness of garment dynamics under
avatars' motion. The existing workflow of modeling, simulation, and rendering
closely replicates the physics behind real garments, but is tedious and
requires repeating most of the workflow whenever the characters' motion,
camera angle, or garment sizing changes. Although data-driven solutions exist, they
either focus on static scenarios or only handle dynamics of tight garments. We
present a solution that, at test time, takes in body joint motion to directly
produce realistic dynamic garment image sequences. Specifically, given the
target joint motion sequence of an avatar, we propose dynamic neural garments
to jointly simulate and render plausible dynamic garment appearance from an
unseen viewpoint. Technically, our solution generates a coarse garment proxy
sequence, learns deep dynamic features attached to this template, and neurally
renders the features to produce appearance changes such as folds, wrinkles, and
silhouettes. We demonstrate generalization behavior to both unseen motion and
unseen camera views. Further, our network can be fine-tuned to adapt to new
body shapes and/or background images. We also provide comparisons against
existing neural rendering and image sequence translation approaches, and report
clear quantitative improvements.
Electrospun Fe2C-loaded carbon nanofibers as efficient electrocatalysts for oxygen reduction reaction
Carbon-based non-precious metal catalysts have been regarded as the most promising alternatives to the state-of-the-art Pt/C catalyst for the oxygen reduction reaction (ORR). However, some unresolved challenges, such as agglomeration of nanoparticles, complex preparation processes, and low production efficiency, severely hamper the large-scale production of non-precious metal catalysts. Herein, a novel carbon-based non-precious metal catalyst, i.e., iron carbide nanoparticles embedded on carbon nanofibers (Fe2C/CNFs), is prepared via the direct pyrolysis of carbon- and iron-containing Janus fibrous precursors obtained by electrospinning. The Fe2C/CNF catalyst shows uniform dispersion and a narrow size distribution of the Fe2C nanoparticles embedded on the CNFs. The obtained catalyst exhibits a positive onset potential (0.87 V versus RHE) and a large kinetic current density (1.9 mA cm−2), and nearly follows the effective four-electron route, suggesting outstanding electrocatalytic activity for the ORR in 0.1 M KOH solution. Moreover, its stability is better than that of the commercial Pt/C catalyst, owing to the strong binding between the Fe2C particles and the CNFs. This strategy opens new avenues for the design and efficient production of promising electrocatalysts for the ORR.
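As context for the four-electron claim: the electron-transfer number n for the ORR is conventionally extracted from rotating-disk measurements via Koutecky-Levich analysis (the abstract does not detail this, so the following is standard background rather than the authors' specific procedure):

```latex
\frac{1}{j} = \frac{1}{j_k} + \frac{1}{B\,\omega^{1/2}},
\qquad
B = 0.62\, n F C_0 D_0^{2/3} \nu^{-1/6}
```

where j is the measured current density, j_k the kinetic current density, ω the electrode rotation rate, F the Faraday constant, C_0 the bulk O2 concentration, D_0 the O2 diffusion coefficient, and ν the kinematic viscosity of the electrolyte; a fitted n close to 4 indicates the direct (effective four-electron) reduction pathway.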